Automatic font generation without human experts is a practical and significant problem, especially for languages that consist of a large number of characters. Existing font generation methods are mostly supervised: they require large amounts of paired data, which are labor-intensive and expensive to collect. In contrast, common unsupervised image-to-image translation methods are not applicable to font generation, as they typically define style in terms of textures and colors. In this work, we propose a robust deformable generative network for unsupervised font generation (abbreviated as DGFont++). We introduce a feature deformation skip connection (FDSC) to learn local patterns and geometric transformations between fonts. The FDSC predicts pairs of displacement maps and uses the predicted maps to apply deformable convolution to the low-level content feature maps. The outputs of the FDSC are fed into a mixer to generate the final results. Moreover, we introduce contrastive self-supervised learning to learn a robust style representation for fonts by capturing the similarities and dissimilarities among fonts. To distinguish different styles, we train our model with a multi-task discriminator, which ensures that each style can be discriminated independently. In addition to the adversarial loss, two reconstruction losses are adopted to constrain the domain-invariant characteristics between the generated images and the content images. Taking advantage of the FDSC and the adopted loss functions, our model is able to maintain spatial information and generate high-quality character images in an unsupervised manner. Experiments demonstrate that our model generates character images of higher quality than state-of-the-art methods.
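For intuition, here is a minimal sketch of the FDSC idea as described above: a small convolutional head predicts per-location displacement maps, which then drive a deformable convolution over the low-level content features. The layer sizes, offset-head design, and all names are illustrative assumptions, not the paper's implementation.

```python
# Minimal FDSC-style sketch (assumed design, not the paper's exact layers).
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FDSCSketch(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.k = kernel_size
        # Predict an (x, y) displacement for each kernel sampling location.
        self.offset_head = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=kernel_size // 2)
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)

    def forward(self, content_feat: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_head(content_feat)       # (N, 2*k*k, H, W)
        # Sample the content features at the displaced locations.
        return deform_conv2d(content_feat, offsets, self.weight,
                             padding=self.k // 2)

feat = torch.randn(1, 64, 32, 32)   # low-level content feature map
out = FDSCSketch(64)(feat)          # deformed features, fed to the mixer
print(out.shape)                    # torch.Size([1, 64, 32, 32])
```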
Font generation is a difficult and time-consuming task, especially for languages that use ideograms with complicated structures and a large number of characters, such as Chinese. To solve this problem, few-shot font generation, and even one-shot font generation, has attracted a lot of attention. However, most existing font generation methods still suffer from (i) the large cross-font gap; (ii) subtle cross-font variations; and (iii) incorrect generation of complicated characters. In this paper, we propose a novel one-shot font generation method based on a diffusion model, named Diff-Font, which can be stably trained on large datasets. The proposed model aims to generate the entire font library given only one sample as the reference. Specifically, a large stroke-wise dataset is constructed, and a stroke-wise diffusion model is proposed to preserve the structure and completeness of each generated character. To the best of our knowledge, Diff-Font is the first work to develop diffusion models for the font generation task. The well-trained Diff-Font is not only robust to the font gap and font variation, but also achieves promising performance on the generation of difficult characters. Compared to previous font generation methods, our model reaches state-of-the-art performance both qualitatively and quantitatively.
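As a rough illustration of how such a model produces a glyph, the sketch below shows a generic DDPM-style denoising loop conditioned on a content embedding (e.g., a stroke-attribute encoding) and a one-shot style embedding. The conditioning interface, noise schedule, and `eps_model` are assumptions, not Diff-Font's actual design.

```python
# Generic conditional DDPM sampling loop (schematic, assumed schedule).
import torch

@torch.no_grad()
def sample_glyph(eps_model, content_emb, style_emb, T=1000,
                 shape=(1, 1, 80, 80)):
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                     # start from pure noise
    for t in reversed(range(T)):
        # Noise prediction conditioned on character content and font style.
        eps = eps_model(x, torch.tensor([t]), content_emb, style_emb)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                   # generated glyph image
```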
Obtaining the position of the ego-vehicle is a crucial prerequisite for automatic control and path planning in autonomous driving. Most existing positioning systems rely on GPS, RTK, or wireless signals, which struggle to provide effective localization under weak signal conditions. This paper proposes a real-time positioning system based on the detection of parking numbers, which serve as unique positioning marks in parking lot scenes. It not only helps with positioning in open areas, but also runs independently in isolated environments. Results on both public datasets and a self-collected dataset show that the system outperforms others in performance and is applicable in practice. In addition, the code and dataset will be released later.
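The core idea lends itself to a simple sketch: since each parking number is a unique, pre-surveyed landmark, recognizing one and measuring the offset to it pins down the ego position. The map format, names, and offset source below are hypothetical placeholders, not the paper's system.

```python
# Hedged sketch: parking numbers as unique, pre-surveyed landmarks.
from dataclasses import dataclass

@dataclass
class Landmark:
    x: float   # surveyed position of the parking number (metres, lot frame)
    y: float

# Pre-surveyed map: parking number -> coordinates (illustrative entries).
LANDMARK_MAP = {"B203": Landmark(12.5, 48.0), "B204": Landmark(15.0, 48.0)}

def localize(detected_number: str, offset_x: float, offset_y: float):
    """Estimate ego position from a recognized parking number and the
    camera-measured offset to it (e.g., from the detection's geometry)."""
    mark = LANDMARK_MAP.get(detected_number)
    if mark is None:
        return None                       # unknown mark: fall back to odometry
    return (mark.x - offset_x, mark.y - offset_y)

print(localize("B203", 1.2, -0.5))        # -> (11.3, 48.5)
```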
Controllable person image synthesis enables a wide range of applications through explicit control over body pose and appearance. In this paper, we propose a cross-attention based style distribution module that computes attention between the source semantic styles and the target pose for pose transfer. The module deliberately selects the style of each semantic representation and distributes it according to the target pose. The attention matrix of the cross-attention expresses the dynamic similarity between the target pose and the source styles of all semantics; it can therefore be exploited to route color and texture from the source image, and is further constrained by the target parsing map for sharper results. Meanwhile, to encode the source appearance accurately, self-attention among the different semantic styles is also added. The effectiveness of our model is validated quantitatively and qualitatively on pose transfer and virtual try-on tasks.
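A minimal sketch of the cross-attention computation described above: target-pose features act as queries, and per-semantic source style vectors act as keys and values, so each target location is routed a pose-appropriate mix of source styles. All dimensions and projections are illustrative assumptions.

```python
# Cross-attention between target-pose queries and source style tokens.
import torch
import torch.nn.functional as F

def style_distribution(pose_feat, style_tokens, Wq, Wk, Wv):
    """pose_feat: (HW, d) target-pose features; style_tokens: (S, d) one
    style vector per source semantic region (hair, top, pants, ...)."""
    q = pose_feat @ Wq                   # (HW, d)
    k = style_tokens @ Wk                # (S, d)
    v = style_tokens @ Wv                # (S, d)
    # Attention matrix: similarity between each target location and each style.
    attn = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)   # (HW, S)
    return attn @ v                      # style assigned per target location

d, HW, S = 64, 32 * 32, 8
out = style_distribution(torch.randn(HW, d), torch.randn(S, d),
                         *(torch.randn(d, d) for _ in range(3)))
print(out.shape)                         # torch.Size([1024, 64])
```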
Most existing image inpainting algorithms are based on a single view and struggle with large holes or holes containing complicated scenes. Some reference-guided algorithms fill the hole by referring to an image from another viewpoint and using 2D image alignment. Due to the camera imaging process, a simple 2D transformation can hardly achieve satisfactory results. In this paper, we propose 3DFill, a simple and efficient method for reference-guided image inpainting. Given a target image with arbitrary hole regions and a reference image from another viewpoint, 3DFill first aligns the two images in two stages: a 3D projection followed by a 2D transformation, which yields better results than 2D image alignment alone. The 3D projection is an overall alignment between the images, while the 2D transformation is a local alignment focused on the hole region. The entire image alignment process is self-supervised. We then fill the hole in the target image with the contents of the aligned image. Finally, we use a conditional generation network to refine the filled image and obtain the inpainting result. 3DFill achieves state-of-the-art inpainting performance across a variety of wide view shifts and has a faster inference speed than other inpainting models.
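For concreteness, the 3D projection step can be sketched as standard depth-based reprojection: back-project a reference pixel with its depth, apply the relative camera pose, and project into the target view. The intrinsics, pose, and depth values below are placeholders; the paper's actual alignment is learned and self-supervised.

```python
# Standard pinhole reprojection from a reference view into a target view.
import numpy as np

def reproject(u, v, depth, K, R, t):
    """Map reference pixel (u, v) with depth into target-view pixel coords."""
    p_ref = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth  # 3D in ref cam
    p_tgt = R @ p_ref + t                                     # relative pose
    uvw = K @ p_tgt
    return uvw[:2] / uvw[2]                                   # target (u, v)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])   # small lateral baseline
print(reproject(320, 240, 2.0, K, R, t))      # shifted target pixel
```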
Recently, several spatial-memory-based methods have verified that storing intermediate frames and their masks as memory helps segment target objects in videos. However, they mainly focus on better matching between the current frame and the memory frames, without explicitly paying attention to memory quality. Consequently, frames with poor segmentation masks are easily memorized, which leads to an error-accumulation problem in the segmentation masks and further degrades segmentation performance. In addition, the linear growth of memory frames with the number of frames limits the model's ability to handle long videos. To this end, we propose a Quality-aware Dynamic Memory Network (QDMN) that evaluates the segmentation quality of each frame, allowing the memory bank to selectively store accurately segmented frames and prevent the error-accumulation problem. We then combine segmentation quality with temporal consistency to dynamically update the memory bank and improve the model's practicality. Without any bells and whistles, our QDMN achieves new state-of-the-art performance on both the DAVIS and YouTube-VOS benchmarks. Moreover, extensive experiments show that the proposed Quality Assessment Module (QAM) can be applied to memory-based methods as a generic plugin and significantly improves performance. Our source code is available at https://github.com/workforai/qdmn.
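A hedged sketch of the quality-aware memory update: each frame's predicted mask receives a quality score, and only confidently segmented frames enter a bounded memory bank. The scorer, threshold, and capacity below stand in for the paper's QAM and update policy.

```python
# Quality-gated, bounded memory bank (illustrative policy, not the paper's).
from collections import deque

class QualityAwareMemory:
    def __init__(self, max_frames=20, threshold=0.8):
        self.bank = deque(maxlen=max_frames)   # bounded, for long videos
        self.threshold = threshold

    def maybe_store(self, frame_feat, mask, quality_score: float):
        # Store only frames whose masks the quality module trusts,
        # preventing poorly segmented frames from polluting the memory.
        if quality_score >= self.threshold:
            self.bank.append((frame_feat, mask, quality_score))

    def read(self):
        return list(self.bank)                 # matched against current frame
```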
We discuss the fundamental issue of identification in linear instrumental variable (IV) models with unknown IV validity. We revisit the popular majority and plurality rules and show that, in general, neither yields an identification condition that is "if and only if". Assuming the "sparsest rule", which is equivalent to the plurality rule but becomes operational in computational algorithms, we investigate and prove the advantages of non-convex penalized methods over other IV estimators based on two-step selection, in terms of selection consistency and adaptivity to individually weak IVs. Furthermore, we propose an alternative lower penalty that is consistent with the identification condition and simultaneously provides an oracle sparse structure. Compared with the previous literature, desirable theoretical properties are derived for the estimator under weak IV strength. Finite-sample properties are demonstrated using simulations, and the selection and estimation methods are applied to an empirical study on the effect of trade on economic growth.
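For orientation, a common formulation of the invalid-IV problem in this literature (stated here as background, not necessarily the paper's exact notation) treats an instrument as invalid when it has a direct effect on the outcome, and recovers the causal effect by penalizing those direct effects toward sparsity:

```latex
% Standard invalid-IV setup (assumed notation): Z_i are candidate instruments,
% D_i the treatment, and \alpha_j \neq 0 marks instrument j as invalid.
y_i = D_i \beta + Z_i^{\top} \alpha + \varepsilon_i, \qquad
(\hat{\beta}, \hat{\alpha}) = \arg\min_{\beta,\,\alpha}\;
  \frac{1}{2n} \sum_{i=1}^{n} \bigl( y_i - D_i \beta - Z_i^{\top} \alpha \bigr)^2
  + \sum_{j=1}^{p} p_{\lambda}\bigl( |\alpha_j| \bigr)
```

Here $p_{\lambda}$ is a (possibly non-convex) penalty; under the sparsest rule, the identified $\beta$ is the value that makes the implied direct-effect vector $\alpha$ as sparse as possible.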
Sound source localization aims to seek the direction of arrival (DOA) of all sound sources from the observed multi-channel audio. For the practical problem of an unknown number of sources, existing localization algorithms attempt to predict a likelihood-based coding (i.e., a spatial spectrum) and employ a pre-determined threshold to detect the source number and the corresponding DOA values. However, these threshold-based algorithms are not stable, as they are limited by the careful choice of threshold. To address this issue, we propose an iterative sound source localization method called ISSL, which iteratively extracts the DOA of each source without thresholds until a termination criterion is met. Unlike threshold-based algorithms, ISSL designs an active source detector network based on a binary classifier, which takes the residual spatial spectrum as input and decides whether to stop the iteration. By doing so, our ISSL can handle an arbitrary number of sources, even exceeding the number of sources seen during the training stage. Experimental results show that our ISSL achieves significant performance improvements in both DOA estimation and source number detection compared with existing threshold-based algorithms.
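The threshold-free loop can be sketched as follows: repeatedly take the strongest peak of the residual spatial spectrum, and let a learned binary detector decide whether any active source remains. The peak-suppression width and the toy `stop_detector` below are placeholders for the paper's network.

```python
# Iterative DOA extraction without a detection threshold (schematic).
import numpy as np

def iterative_doa(spectrum: np.ndarray, stop_detector, max_sources=10):
    """spectrum: likelihood over candidate DOAs (e.g., 360 azimuth bins)."""
    residual = spectrum.copy()
    doas = []
    for _ in range(max_sources):
        if not stop_detector(residual):   # classifier says: no source left
            break
        doa = int(np.argmax(residual))    # extract the dominant source
        doas.append(doa)
        lo, hi = max(doa - 5, 0), min(doa + 6, len(residual))
        residual[lo:hi] = 0.0             # suppress it, then re-check
    return doas

# Toy run with a crude energy-based detector in place of the learned one.
spec = np.zeros(360); spec[[40, 200]] = [1.0, 0.8]
print(iterative_doa(spec, lambda r: r.max() > 0.5))   # -> [40, 200]
```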
Ultra-wideband (UWB) time difference of arrival (TDOA) based localization has recently emerged as a promising, low-cost, and scalable indoor localization solution that is particularly suited to multi-robot applications. However, there appears to be a lack of public datasets for benchmarking emerging UWB TDOA localization techniques in cluttered indoor environments. To fill this gap, we present a comprehensive dataset consisting of UWB TDOA identification experiments and flight experiments based on Decawave's DWM1000 UWB modules. In the identification experiments, we collected low-level signal information, including signal-to-noise ratio (SNR) and power difference values, under various line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. For the flight experiments, we conducted a cumulative total of approximately 150 minutes of real-world flights with four different anchor constellations at an average speed of 0.45 m/s. Raw sensor data, including UWB TDOA, inertial measurement unit (IMU), optical flow, time-of-flight (ToF) laser, and millimeter-accurate ground truth, were collected during the flights. The dataset and development kit are available at https://utiasdsl.github.io/util-uwb-dataset/.
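For readers new to TDOA, the measurement model underlying such a dataset is standard: each measurement is the difference of distances from the tag to a pair of anchors. The sketch below states it in code with illustrative positions.

```python
# Standard TDOA measurement model: difference of tag-to-anchor ranges.
import numpy as np

def tdoa_measurement(p, anchor_i, anchor_j):
    """Expected range difference (metres) for anchor pair (i, j)."""
    return np.linalg.norm(p - anchor_i) - np.linalg.norm(p - anchor_j)

p = np.array([1.0, 2.0, 1.5])                    # tag (robot) position
a_i, a_j = np.array([0, 0, 2.5]), np.array([5, 0, 2.5])
print(tdoa_measurement(p, a_i, a_j))             # residual used in estimation
```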
Time series classification methods based on deep neural networks easily overfit on the UCR datasets, which is caused by the few-shot problem of these datasets. Therefore, to alleviate the overfitting phenomenon and further improve accuracy, we first propose label smoothing for InceptionTime (LSTime), which uses the information of soft labels in addition to hard labels. Next, instead of manually tuning the soft labels as in LSTime, knowledge distillation for InceptionTime (KDTime) is proposed to generate soft labels automatically through a teacher model. Finally, to rectify incorrectly predicted soft labels from the teacher model, calibrated knowledge distillation for InceptionTime (KDCTime) is proposed, which contains two optional calibrating strategies: KD calibration by translating (KDCT) and KD calibration by reordering (KDCR). Experimental results show that the accuracy of KDCTime is promising, while its inference time is two orders of magnitude faster than ROCKET, with an acceptable training time overhead.
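The two building blocks named above are standard and easy to sketch: label smoothing turns hard labels into soft targets, and knowledge distillation replaces them with a teacher's temperature-softened predictions. The calibration strategies (KDCT/KDCR) are not reproduced here.

```python
# Standard label smoothing and temperature-scaled KD loss (textbook forms).
import torch
import torch.nn.functional as F

def smooth_labels(hard: torch.Tensor, num_classes: int, eps: float = 0.1):
    # Mix the one-hot target with a uniform distribution over classes.
    onehot = F.one_hot(hard, num_classes).float()
    return (1.0 - eps) * onehot + eps / num_classes

def kd_loss(student_logits, teacher_logits, T: float = 4.0):
    # KL divergence between temperature-softened teacher and student outputs.
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * T * T

labels = torch.tensor([2, 0])
print(smooth_labels(labels, num_classes=5))
print(kd_loss(torch.randn(2, 5), torch.randn(2, 5)))
```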